Results 1 - 3 of 3
1.
Neural Netw ; 175: 106310, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38663301

ABSTRACT

Thermal infrared detectors have a vast array of potential applications in pedestrian detection and autonomous driving, and their safety performance is of great concern. Recent works use bulb plates, "QR" suits, and infrared patches as physical perturbations to perform white-box attacks on thermal infrared detectors, which are effective but not practical for real-world scenarios. Some researchers have tried to utilize hot and cold blocks as physical perturbations for black-box attacks on thermal infrared detectors. However, these attempts have not yielded robust, multi-view physical attacks, indicating limitations in the approach. To overcome the limitations of existing approaches, we introduce a novel black-box physical attack method, called adversarial infrared blocks (AdvIB). By optimizing the physical parameters of the infrared blocks and deploying them on pedestrians from multiple views, including the front, side, and back, AdvIB can execute robust, multi-view attacks on thermal infrared detectors. Our physical tests show that the proposed method achieves a success rate of over 80% under most distance and view conditions, validating its effectiveness. For stealthiness, the adversarial infrared blocks are attached to the inside of clothing, making them hard to notice. Additionally, we perform comprehensive experiments and compare the results with baselines to verify the robustness of our method. In summary, AdvIB enables potent multi-view black-box attacks and raises significant ethical and security concerns: potential consequences include disasters from technology misuse and legal liability for attackers. Considering these concerns, we urge heightened attention to the proposed AdvIB. Our code can be accessed from the following link: https://github.com/ChengYinHu/AdvIB.git.
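The attack described above amounts to a query-based (black-box) search over the physical parameters of the infrared blocks. The Python sketch below illustrates that general idea under stated assumptions: the block encoding (x, y, width, height, temperature), the random-search optimizer, and the detector_confidence function are hypothetical stand-ins, not the authors' AdvIB implementation (see the linked repository for that).

# Hypothetical sketch of a black-box search over infrared-block parameters.
# A real attack would render the candidate blocks onto the pedestrian in the
# thermal image and query the target detector for its person confidence.
import numpy as np

rng = np.random.default_rng(0)

def detector_confidence(params: np.ndarray) -> float:
    """Stand-in for querying a thermal infrared detector.

    params encodes each block as (x, y, width, height, temperature) in
    normalized units; a smooth dummy score is returned here so the sketch
    runs end to end."""
    return float(1.0 / (1.0 + np.sum((params - 0.3) ** 2)))

def black_box_attack(n_blocks=4, iters=200, pop=20, sigma=0.1):
    dim = n_blocks * 5                      # 5 physical parameters per block
    best = rng.uniform(0.0, 1.0, size=dim)  # random initial block layout
    best_score = detector_confidence(best)
    for _ in range(iters):
        # Sample candidate layouts around the current best (random search).
        cands = np.clip(best + sigma * rng.standard_normal((pop, dim)), 0.0, 1.0)
        scores = np.array([detector_confidence(c) for c in cands])
        i = int(np.argmin(scores))          # lower confidence = stronger attack
        if scores[i] < best_score:
            best, best_score = cands[i], scores[i]
    return best, best_score

if __name__ == "__main__":
    params, score = black_box_attack()
    print(f"final detector confidence: {score:.3f}")

In practice, the block parameters found this way would be evaluated from the front, side, and back views and at several distances before fabrication, which is the multi-view robustness the abstract emphasizes.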


Subject(s)
Infrared Rays, Humans, Computer Security, Algorithms, Pedestrians, Neural Networks (Computer), Automobile Driving
2.
IEEE Trans Image Process ; 33: 722-737, 2024.
Article in English | MEDLINE | ID: mdl-38150348

ABSTRACT

Deep neural networks (DNNs) are shown to be vulnerable to universal adversarial perturbations (UAP): a single quasi-imperceptible perturbation that deceives a DNN on most input images. Current UAP methods can be divided into data-dependent and data-independent methods. The former exhibit weak transferability to black-box models because they rely too heavily on model-specific features; the latter show inferior attack performance on white-box models because they fail to exploit the model's responses to benign images. To address these issues, this paper proposes a novel universal adversarial attack that generates UAPs with strong transferability by disrupting model-agnostic features (e.g., edges or simple textures), which are invariant across models. Specifically, we first devise an objective function that weakens the significant channel-wise features and strengthens the less significant ones, partitioned by a designed strategy. Furthermore, the proposed objective function eliminates the dependency on labeled samples, allowing us to use out-of-distribution (OOD) data to train the UAP. To enhance attack performance with limited training samples, we exploit the average gradient of the mini-batch input to update the UAP iteratively, which encourages the UAP to capture local information inside the mini-batch. In addition, we introduce a momentum term that accumulates gradient information at each iterative step so as to perceive global information over the training set. Finally, extensive experimental results demonstrate that the proposed method outperforms existing UAP approaches. We also exhaustively investigate the transferability of the UAP across models, datasets, and tasks.
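The iterative update described above (mini-batch average gradient plus a momentum term, driven by a label-free feature-disruption objective) can be sketched as follows. This is a hedged illustration, not the paper's code: the channel-partition strategy is reduced to weakening only the top-activated channels, the feature_layer, loader, input resolution (3x224x224 in [0, 1]), and step sizes are assumptions, and the momentum-normalized sign update follows the common MI-FGSM-style pattern.

# Hedged sketch of a momentum-based UAP update with a label-free
# channel-disruption loss. Assumes inputs of shape (batch, 3, 224, 224).
import torch

def channel_disruption_loss(features: torch.Tensor, top_ratio: float = 0.25) -> torch.Tensor:
    """Mean strength of the most significant channels; minimizing it weakens them."""
    strength = features.abs().mean(dim=(2, 3))       # (batch, channels)
    k = max(1, int(top_ratio * strength.shape[1]))
    top_vals, _ = strength.topk(k, dim=1)
    return top_vals.mean()

def train_uap(model, feature_layer, loader, eps=10 / 255, steps=1000,
              step_size=1 / 255, momentum=0.9, device="cpu"):
    model.eval().to(device)
    uap = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    g = torch.zeros_like(uap)                        # momentum buffer
    feats = {}
    handle = feature_layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
    it = iter(loader)
    for _ in range(steps):
        try:
            x, _ = next(it)                          # labels, if any, are ignored
        except StopIteration:
            it = iter(loader)
            x, _ = next(it)
        x = x.to(device)
        model(torch.clamp(x + uap, 0.0, 1.0))        # forward pass fills feats["out"]
        loss = channel_disruption_loss(feats["out"]) # averaged over the mini-batch
        grad = torch.autograd.grad(loss, uap)[0]
        g = momentum * g + grad / grad.abs().mean().clamp_min(1e-12)
        with torch.no_grad():
            uap -= step_size * g.sign()              # descend: weaken dominant channels
            uap.clamp_(-eps, eps)                    # keep the perturbation quasi-imperceptible
    handle.remove()
    return uap.detach()

With limited samples, each step sees the local information of one mini-batch through its averaged gradient, while the momentum buffer accumulates gradients across steps to capture global information over the training set, as the abstract describes.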

3.
Neural Netw ; 163: 256-271, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37086543

ABSTRACT

Deep neural network-based object detectors are vulnerable to adversarial examples. Among existing methods for fooling object detectors, camouflage-based approaches are often adopted because they adapt to multi-view scenarios and non-planar objects. However, most of them can still be easily noticed by human eyes, which limits their application in the real world. To fool human eyes and object detectors simultaneously, we propose a differential evolution based dual adversarial camouflage method. Specifically, we obtain the camouflage texture through two-stage training, and the texture can be wrapped over the surface of the object. In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene background, making the object difficult for human eyes to distinguish. In the second stage, we design three loss functions to optimize the local texture, which is selected from the global texture, making object detectors ineffective. In addition, we introduce the differential evolution algorithm to search for near-optimal areas of the object to attack, improving adversarial performance under attack-area limitations. Experimental results show that the proposed method achieves a good trade-off between fooling human eyes and fooling object detectors across multiple scenes and objects.
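The differential evolution step mentioned above (searching for near-optimal attack areas under an area budget) is a standard population-based optimizer; a minimal sketch follows. The area encoding (u, v, size) and the attack_fitness function are hypothetical placeholders: a real implementation would paint the local adversarial texture into the candidate area, render the object, and return the detector's confidence.

# Minimal differential evolution sketch for selecting an attack area.
import numpy as np

rng = np.random.default_rng(0)

def attack_fitness(area: np.ndarray) -> float:
    """Placeholder detector confidence after texturing area = (u, v, size)."""
    return float(np.sum((area - np.array([0.4, 0.6, 0.2])) ** 2))

def differential_evolution(fitness, dim=3, pop_size=20, gens=100, F=0.5, CR=0.9):
    pop = rng.uniform(0.0, 1.0, size=(pop_size, dim))
    scores = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)   # mutation
            cross = rng.uniform(size=dim) < CR            # crossover mask
            trial = np.where(cross, mutant, pop[i])
            s = fitness(trial)
            if s < scores[i]:                             # greedy selection
                pop[i], scores[i] = trial, s
    best = int(np.argmin(scores))
    return pop[best], scores[best]

if __name__ == "__main__":
    area, score = differential_evolution(attack_fitness)
    print("selected attack area (u, v, size):", np.round(area, 3))

In the full method, this area selection would be interleaved with the two-stage texture optimization: the first stage blends the global texture into the scene background, and the second stage makes the selected local patch adversarial to the detector.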


Subject(s)
Algorithms, Neural Networks (Computer), Humans